I found it very instrumentally useful to try to factor out the belief-propagation impact of people with nothing clearly impressive to show. There is a small risk that I miss some useful insights, but there is much less pollution from privileged hypotheses built on wrong priors.
It’s perfectly OK to give low priors to strange beliefs, like: “Here is EY, a guy from the internet who found a way to save the world, because all scientists are wrong. Everybody, listen to him, take him seriously despite his lack of credentials, give him your money, and spread his words.” However, low does not mean infinitely low. A hypothesis with a low prior can still be saved by sufficient evidence.
For example, the hypothesis that “Washington is the capital city of the USA” also has a very low prior: there are over 30,000 towns in the USA, and only one of them can be the capital, so why exactly should I privilege the Washington hypothesis? But there happens to be more than enough evidence to override the initially weak prior.
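To illustrate the arithmetic behind this, here is a minimal sketch of an odds-form Bayesian update. The specific numbers (a 1-in-30,000 prior, two pieces of evidence each assumed to be 1000:1 more likely if the hypothesis is true) are illustrative assumptions, not anything computed from the original comment:

```python
# Sketch: a very low prior overcome by strong evidence,
# via odds-form Bayes updating (posterior odds = prior odds * likelihood ratios).

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each independent likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1.0 + odds)

# Prior: Washington is one of ~30,000 US towns, so roughly 1/30,000.
prior_prob = 1.0 / 30_000
prior_odds = prior_prob / (1.0 - prior_prob)

# Two independent observations, each assumed (hypothetically) to be
# 1000 times more likely if Washington really is the capital.
posterior_odds = update_odds(prior_odds, [1000, 1000])
posterior_prob = odds_to_prob(posterior_odds)

print(f"prior:     {prior_prob:.6f}")   # about 0.000033
print(f"posterior: {posterior_prob:.3f}")  # about 0.971
```

The point is only that the prior being small is not the end of the story: a few strong likelihood ratios multiply it back up to near-certainty.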
So basically the question is how much evidence EY needs before it becomes rational to consider his thoughts seriously (which does not yet mean he is right); exactly how low is this prior? So… How many people on this planet are putting a comparable amount of time and study into the topic of the values of artificial intelligence? Is he able to convince seemingly rational people, or is he followed by a bunch of morons? Is his criticism of scientific processes just unsubstantiated school-dropout envy, or has he been proven right? Etc. I don’t pretend to do a Bayesian calculation; it just seems to me that the prior is not that low, and there is enough evidence. (And by the way, Dmytry, your presence at this website is also weak evidence, isn’t it? I guess there are millions of web pages that you do not read and comment on regularly. There are even many computer-related or AI-related pages you don’t read, but you do read this one—why?)